ref(core): Remove provider-specific AI span attributes in favor of gen_ai attributes in sentry conventions (#20011)
Conversation
Cursor Bugbot has reviewed your changes and found 1 potential issue.
Bugbot Autofix prepared a fix for the issue found in the latest run.
- ✅ Fixed: Unused `responseTimestamp` still tracked in streaming state. Removed the unused `responseTimestamp` field from `StreamingState`, its initialization, and both dead assignment sites in stream processing.
Or push these changes by commenting:
@cursor push e9dd5d81c7
Preview (e9dd5d81c7)
diff --git a/packages/core/src/tracing/openai/streaming.ts b/packages/core/src/tracing/openai/streaming.ts
--- a/packages/core/src/tracing/openai/streaming.ts
+++ b/packages/core/src/tracing/openai/streaming.ts
@@ -36,8 +36,6 @@
responseId: string;
/** The model name. */
responseModel: string;
- /** The timestamp of the response. */
- responseTimestamp: number;
/** Number of prompt/input tokens used. */
promptTokens: number | undefined;
/** Number of completion/output tokens used. */
@@ -99,7 +97,6 @@
function processChatCompletionChunk(chunk: ChatCompletionChunk, state: StreamingState, recordOutputs: boolean): void {
state.responseId = chunk.id ?? state.responseId;
state.responseModel = chunk.model ?? state.responseModel;
- state.responseTimestamp = chunk.created ?? state.responseTimestamp;
if (chunk.usage) {
// For stream responses, the input tokens remain constant across all events in the stream.
@@ -183,7 +180,6 @@
const { response } = event as { response: OpenAIResponseObject };
state.responseId = response.id ?? state.responseId;
state.responseModel = response.model ?? state.responseModel;
- state.responseTimestamp = response.created_at ?? state.responseTimestamp;
if (response.usage) {
// For stream responses, the input tokens remain constant across all events in the stream.
@@ -227,7 +223,6 @@
finishReasons: [],
responseId: '',
responseModel: '',
- responseTimestamp: 0,
promptTokens: undefined,
completionTokens: undefined,
  totalTokens: undefined,
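The diff above drops the dead `responseTimestamp` field from the chunk-accumulation path. A minimal self-contained sketch of that pattern (hypothetical, simplified types; not the actual SDK code in streaming.ts):

```typescript
// Simplified stand-ins for the SDK's streaming state and chunk types.
interface StreamingState {
  responseId: string;
  responseModel: string;
  promptTokens: number | undefined;
  completionTokens: number | undefined;
}

interface Chunk {
  id?: string;
  model?: string;
  usage?: { prompt_tokens: number; completion_tokens: number };
}

// Accumulate response metadata as stream chunks arrive; missing fields
// fall back to the previously recorded value via nullish coalescing.
function processChunk(chunk: Chunk, state: StreamingState): void {
  state.responseId = chunk.id ?? state.responseId;
  state.responseModel = chunk.model ?? state.responseModel;
  if (chunk.usage) {
    // For stream responses, the input tokens remain constant across all
    // events in the stream, so the latest usage event simply wins.
    state.promptTokens = chunk.usage.prompt_tokens;
    state.completionTokens = chunk.usage.completion_tokens;
  }
}
```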
JPeer264 left a comment:
As you already mentioned, I think this should rather be dropped in a major release.
andreiborza left a comment:
As discussed, it's fine that we drop them but let's add an entry in the changelog to mention these are getting dropped and which equivalents to use if any.
@andreiborza sounds good, added the changelog


In the openai and anthropic integrations we send multiple provider-specific attributes. None of these are part of our sentry conventions and should therefore be omitted. They fall into two categories:
1. Attributes with an equivalent in the `gen_ai` namespace: `openai.response.id`, `openai.response.model`, `openai.usage.prompt_tokens`, `openai.usage.completion_tokens`
2. Attributes without a `gen_ai` equivalent: `openai.response.timestamp`, `anthropic.response.timestamp`. These have no `gen_ai` equivalent, so we would no longer send this data at all, but since they are not in the semantic conventions we probably shouldn't send them either.

According to Hex, none of these attributes are used in any stored queries, dashboards or alerts. We will still call this out in the changelog in case any users rely on this in hooks.
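For users who read these attributes in hooks, the migration for the first category can be sketched as a key mapping. This is a hypothetical illustration: the `gen_ai.*` names follow the OpenTelemetry GenAI semantic conventions, and the exact keys Sentry emits may differ.

```typescript
// Hypothetical mapping from the dropped provider-specific attribute keys
// to their gen_ai equivalents. An undefined value means the attribute has
// no replacement and the data is no longer emitted at all.
const ATTRIBUTE_MIGRATION: Record<string, string | undefined> = {
  'openai.response.id': 'gen_ai.response.id',
  'openai.response.model': 'gen_ai.response.model',
  'openai.usage.prompt_tokens': 'gen_ai.usage.input_tokens',
  'openai.usage.completion_tokens': 'gen_ai.usage.output_tokens',
  // Second category: no gen_ai equivalent exists.
  'openai.response.timestamp': undefined,
  'anthropic.response.timestamp': undefined,
};

// Resolve an attribute key: migrated keys map to their new name (or to
// undefined when dropped); all other keys pass through unchanged.
function migrateAttributeKey(key: string): string | undefined {
  return key in ATTRIBUTE_MIGRATION ? ATTRIBUTE_MIGRATION[key] : key;
}
```

A hook that previously looked up `openai.usage.prompt_tokens` on a span would, under this sketch, read `gen_ai.usage.input_tokens` instead.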
Closes #20015 (added automatically)